
    Optimisation of out-vessel magnetic diagnostics for plasma boundary reconstruction in tokamaks

    To improve the low-frequency spectrum of magnetic field measurements in future tokamak reactors such as ITER, several steady-state magnetic sensor technologies have been considered. For all of the studied technologies it is advantageous to place the sensors outside the vacuum vessel and as far from the reactor core as possible, to minimize radiation damage and temperature effects, but not so far as to compromise the accuracy of the equilibrium reconstruction. We have studied to what extent increasing the distance between out-vessel sensors and the plasma can be compensated for by sensor accuracy and/or density, before the limit imposed by the degeneracy of the problem is reached. The study is particularized to the Swiss TCV tokamak, owing to the quality of its magnetic data and its ability to operate with a wide range of plasma shapes and divertor configurations. We have scanned the plasma boundary reconstruction error as a function of out-vessel sensor density, accuracy, and distance to the plasma, for both the transient and steady-state phases of the tokamak discharge. We find that, in general, there is a broad region of the parameter space in which sensor accuracy, density, and proximity to the plasma can be traded for one another to obtain a desired level of accuracy in the reconstructed boundary, up to some limit. Extrapolation of the results to a tokamak reactor suggests that a hybrid configuration, with sensors both inside and outside the vacuum vessel, could be used to obtain a good boundary reconstruction during both the transient and the flat-top of the discharge, if out-vessel magnetic sensors of sufficient density and accuracy can be placed sufficiently far outside the vessel to minimize radiation damage.
    Comment: 36 pages, 17 figures. Accepted for publication in Nuclear Fusion.
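
    For intuition, here is a minimal toy sketch of the kind of trade-off scan described above: boundary "moments" are recovered by least squares from noisy measurements taken at increasing distance, so that sensor count, noise level, and stand-off can be traded against one another. The geometry, the multipole-like falloff, and all parameter values are illustrative assumptions, not the TCV setup or the paper's actual reconstruction code.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def toy_response(n_sensors, distance, n_moments=5):
        """Toy linear response matrix mapping boundary 'moments' to field
        measurements at n_sensors points on a circle at the given distance.
        Higher moments fall off faster with distance, mimicking multipoles."""
        theta = np.linspace(0.0, 2.0 * np.pi, n_sensors, endpoint=False)
        m = np.arange(1, n_moments + 1)
        return np.cos(np.outer(theta, m)) / distance ** (m + 1)

    def boundary_rms_error(n_sensors, distance, noise, n_trials=200):
        """Mean least-squares recovery error of the moments under sensor noise."""
        G = toy_response(n_sensors, distance)
        x_true = np.array([1.0, 0.5, 0.3, 0.2, 0.1])  # assumed "true" moments
        errs = []
        for _ in range(n_trials):
            data = G @ x_true + noise * rng.standard_normal(G.shape[0])
            x_hat, *_ = np.linalg.lstsq(G, data, rcond=None)
            errs.append(np.linalg.norm(x_hat - x_true))
        return float(np.mean(errs))

    # Scan distance vs. sensor density vs. accuracy: within a broad region the
    # three can be traded off, until the conditioning (degeneracy) of G bites.
    for distance in (1.5, 2.0, 3.0):
        for n_sensors in (16, 64):
            for noise in (1e-4, 1e-3):
                err = boundary_rms_error(n_sensors, distance, noise)
                print(f"d={distance:3.1f} N={n_sensors:3d} sigma={noise:.0e} err={err:.2e}")
    ```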

    Optimal Policy Projections

    We outline a method to provide advice on optimal monetary policy while taking policymakers’ judgment into account. The method constructs optimal policy projections (OPPs) by extracting the judgment terms that allow a model, such as the Federal Reserve Board staff economic model, FRB/US, to reproduce a forecast, such as the Greenbook forecast. Given an intertemporal loss function that represents monetary policy objectives, OPPs are the projections — of target variables, instruments, and other variables of interest — that minimize that loss function for given judgment terms. The method is illustrated by revisiting the economy of early 1997 as seen in the Greenbook forecasts of February 1997 and November 1999. In both cases, we use the vintage of the FRB/US model that was in place at that time. These two particular forecasts were chosen, in part, because they were at the beginning and the peak, respectively, of the late 1990s boom period. As such, they differ markedly in their implied judgments of the state of the world in 1997 and our OPPs illustrate this difference. For a conventional loss function, our OPPs provide significantly better performance than Taylor-rule simulations.
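
    The abstract does not state the loss function, but a conventional intertemporal quadratic loss of the kind referred to can be sketched as follows (the symbols and weights here are generic textbook assumptions, not the paper's notation):

    ```latex
    E_t \sum_{\tau=0}^{\infty} \delta^{\tau}
      \left[ \left(\pi_{t+\tau} - \pi^{*}\right)^{2}
           + \lambda\, y_{t+\tau}^{2}
           + \nu\, \left(i_{t+\tau} - i_{t+\tau-1}\right)^{2} \right]
    ```

    Here \pi is inflation, \pi^* its target, y the output gap, i the policy instrument, \delta a discount factor, and \lambda, \nu >= 0 preference weights; an OPP is the projection path that minimizes this loss subject to the model equations and the extracted judgment terms.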

    The Theorems of International Trade with Factor Mobility

    This paper addresses the relation between goods trade and international factor mobility in general terms. Conditions for factor price equalization are derived for situations with trade in both goods and factors, as well as Rybczynski and Stolper-Samuelson theorems. A weak price version of the Heckscher-Ohlin theorem is presented, as well as stronger quantity versions. The basic theorems of international trade, suitably interpreted, are shown to hold in their strong versions if the number of international markets is at least as large as the number of factors. The crucial dimensionality issue is hence not the relative number of goods and factors per se, but the number of international markets relative to the number of factors. Only the price version of the Heckscher-Ohlin theorem fails to be essentially preserved by this condition.
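
    The dimensionality point can be seen in the standard linear framework (generic Heckscher-Ohlin notation, assumed here for illustration): with factor-input matrix A, goods prices p, factor prices w, outputs X, and endowments V, the zero-profit and full-employment conditions read

    ```latex
    p = A^{\top} w , \qquad A\,X = V .
    ```

    When the number of international markets matches the number of factors and A is invertible, w = (A^{\top})^{-1} p is pinned down by world prices alone, independently of V; this is the sense in which enough international markets, whether in goods or in factors, deliver factor price equalization and the strong versions of the other theorems.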

    Evidence cross-validation and Bayesian inference of MAST plasma equilibria

    In this paper, current profiles for plasma discharges on the Mega-Ampere Spherical Tokamak (MAST) are calculated directly from pickup coil, flux loop, and motional Stark effect (MSE) observations, via methods based in the statistical theory of Bayesian analysis. By representing the toroidal plasma current as a series of axisymmetric current beams with rectangular cross-section and inferring the current in each of these beams, flux-surface geometry and q-profiles are subsequently calculated by elementary application of the Biot-Savart law. The use of this plasma model in the context of Bayesian analysis was pioneered by Svensson and Werner on the Joint European Torus (JET) [J. Svensson and A. Werner. Current tomography for axisymmetric plasmas. Plasma Physics and Controlled Fusion, 50(8):085002, 2008]. In this framework, linear forward models are used to generate diagnostic predictions, and the probability distribution for the currents in the collection of plasma beams is calculated directly via application of Bayes' formula. In this work, we introduce a new diagnostic technique to identify and remove outlier observations associated with diagnostics falling out of calibration or suffering from an unidentified malfunction. These modifications enable good agreement between the Bayesian inference of the last closed flux surface (LCFS) and other corroborating data, such as that from force-balance considerations using EFIT++ [L. Appel et al., Proc. 33rd EPS Conf., Rome, Italy, 2006]. In addition, the analysis yields errors on the plasma current profile and flux-surface geometry, and directly predicts the Shafranov shift of the plasma core.
    This work was jointly funded by the Australian Government through International Science Linkages Grant No. CG130047, the Australian National University, the United Kingdom Engineering and Physical Sciences Research Council under Grant No. EP/G003955, and by the European Communities under the contract of Association between EURATOM and CCFE.
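
    At its core, the beam-current inference described above is a linear-Gaussian inverse problem with a closed-form posterior. The following is a minimal sketch of that step only, with a random toy response matrix standing in for the real pickup-coil/flux-loop/MSE forward models and with assumed noise and prior widths; the paper's outlier-rejection technique and the flux-surface reconstruction are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    n_beams, n_obs = 20, 40
    G = rng.standard_normal((n_obs, n_beams))  # toy forward model: data = G @ currents
    sigma_d = 0.05                             # diagnostic noise std (assumed)
    sigma_I = 1.0                              # Gaussian prior width on beam currents

    # Peaked "true" current profile across the row of beams.
    I_true = np.exp(-0.5 * ((np.arange(n_beams) - 10) / 4.0) ** 2)
    d = G @ I_true + sigma_d * rng.standard_normal(n_obs)

    # Bayes' formula for a linear model with Gaussian noise and prior gives a
    # Gaussian posterior with closed-form mean and covariance.
    A = G.T @ G / sigma_d**2 + np.eye(n_beams) / sigma_I**2
    cov = np.linalg.inv(A)
    mean = cov @ (G.T @ d) / sigma_d**2
    err = np.sqrt(np.diag(cov))  # per-beam uncertainty on the current profile

    print("max |posterior mean - truth|:", np.abs(mean - I_true).max())
    print("mean posterior std per beam:", err.mean())
    ```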

    Steep Slopes and Preferred Breaks in GRB Spectra: the Role of Photospheres and Comptonization

    The role of a photospheric component and of pair breakdown is examined in the internal shock model of gamma-ray bursts. We discuss some of the mechanisms by which they would produce anomalously steep low-energy slopes, X-ray excesses, and preferred energy breaks. Sub-relativistic Comptonization should dominate in bursts with high comoving luminosity and high baryon load, while synchrotron radiation dominates the power-law component in bursts with lower comoving luminosity or moderate to low baryon loads. A photosphere leading to steep low-energy spectral slopes should be most prominent at the lowest baryon loads.
    Comment: ApJ 2000, in press; minor revisions 10/5/99; uses aaspp4.sty; 15 pages, 3 figures.
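
    For orientation, a standard order-of-magnitude estimate for the photospheric radius of a fireball wind of luminosity L and dimensionless entropy \eta = L / (\dot{M} c^2), i.e. inverse baryon load, is (this is the textbook expression, assumed here rather than quoted from the paper):

    ```latex
    r_{\mathrm{ph}} \sim \frac{L\,\sigma_{T}}{4\pi\, m_{p}\, c^{3}\, \eta^{3}}
    ```

    A larger baryon load (smaller \eta) pushes the photosphere far out, where adiabatic cooling degrades its thermal emission; this is consistent with the statement above that the photospheric component is most prominent at the lowest baryon loads.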

    Extending the range of error estimates for radial approximation in Euclidean space and on spheres

    We adapt Schaback's error-doubling trick [R. Schaback. Improved error bounds for scattered data interpolation by radial basis functions. Math. Comp., 68(225):201--216, 1999] to give error estimates for radial interpolation of functions whose smoothness lies (in some sense) between that of the usual native space and the subspace of double the smoothness. We do this both for bounded subsets of R^d and for spheres. As a step on the way to our ultimate goal, we also show convergence of pseudoderivatives of the interpolation error.
    Comment: 10 pages.
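
    In schematic form (notation assumed here, not the paper's): the doubling trick starts from the standard native-space estimate for the interpolant s_f of f,

    ```latex
    \|f - s_f\|_{L_{\infty}(\Omega)} \;\le\; C\, h^{\beta}\, \|f\|_{\mathcal{N}} ,
    ```

    and shows that for f in a subspace of roughly double the native smoothness the rate improves to h^{2\beta}. The estimates referred to above fill in the intermediate range, giving rates between h^{\beta} and h^{2\beta} for functions whose smoothness lies between the two spaces.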

    Infinite Factorial Finite State Machine for Blind Multiuser Channel Estimation

    New communication standards need to deal with machine-to-machine communications, in which users may start or stop transmitting at any time in an asynchronous manner. The number of users is therefore an unknown, time-varying parameter that must be accurately estimated in order to properly recover the symbols transmitted by all users in the system. In this paper, we address the problem of joint channel parameter and data estimation in a multiuser communication channel in which the number of transmitters is not known. For that purpose, we develop the infinite factorial finite state machine model, a Bayesian nonparametric model based on the Markov Indian buffet process that allows for an unbounded number of transmitters with arbitrary channel length. We propose an inference algorithm that makes use of slice sampling and particle Gibbs with ancestor sampling. Our approach is fully blind, as it does not require a prior channel estimation step, prior knowledge of the number of transmitters, or any signaling information. Our experimental results, loosely based on the LTE random access channel, show that the proposed approach can effectively recover the data-generating process for a wide range of scenarios, with varying numbers of transmitters and receivers, constellation order, channel length, and signal-to-noise ratio.
    Comment: 15 pages, 15 figures.
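
    A minimal sketch of the generative model being inferred: each user is an independent on/off chain whose symbols pass through that user's own linear channel, and the receiver sees only their superposition. All parameter values below are illustrative assumptions (single receive antenna for brevity), and the Bayesian nonparametric inference itself (slice sampling, particle Gibbs with ancestor sampling) is not shown.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    T, L, snr_db = 200, 3, 10  # symbols, channel taps, SNR (all assumed)
    qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

    def simulate_user():
        """One user: inactive until a random start, active for a random span
        transmitting QPSK, then inactive again -- the asynchronous on/off
        behaviour that the factorial model captures with one chain per user."""
        x = np.zeros(T, dtype=complex)
        start = rng.integers(0, T // 2)
        stop = rng.integers(start + 20, T)
        x[start:stop] = rng.choice(qpsk, size=stop - start)
        h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2 * L)
        return x, h

    n_users = 3  # unknown to the receiver in the paper's setting
    y = np.zeros(T, dtype=complex)
    for _ in range(n_users):
        x, h = simulate_user()
        for l in range(L):  # linear convolution with this user's channel
            y[l:] += h[l] * x[:T - l]

    noise_var = 10 ** (-snr_db / 10)
    y += np.sqrt(noise_var / 2) * (rng.standard_normal(T) + 1j * rng.standard_normal(T))
    # 'y' is the only input to the blind inference: the number of users, their
    # channels, activity patterns, and symbols must all be recovered from it.
    ```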

    Laws as Assets: A Possible Solution to the Time Consistency Problem

    This paper presents a new solution to the time-consistency problem that appears capable of enforcing ex ante policy in a variety of settings in which other enforcement mechanisms do not work. The solution involves formulating a law, institution, or agreement that specifies the optimal ex ante policy and that can be sold by successive old generations to successive young generations. Each young generation pays for the law through the payment of taxes. Both old and young generations have an economic incentive to obey the law. For the old generation that owns the law, breaking the law makes it valueless, and the generation suffers a capital loss. For the young generation, the economic advantage of purchasing the existing law exceeds both its cost and the economic gain from setting up a new law.
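
    The incentive logic can be written as two conditions (notation introduced here for illustration, not taken from the paper): let q be the price at which the law is sold, G the one-time gain a generation could capture by breaking it, W_law the value of buying and living under the existing law, and W_new the value of discarding it and setting up a new one.

    ```latex
    \underbrace{q \;\ge\; G}_{\text{old generation complies}}
    \qquad\qquad
    \underbrace{W_{\text{law}} - q \;\ge\; W_{\text{new}}}_{\text{young generation buys}}
    ```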